
Conversation


@pvary pvary commented Feb 17, 2025

Here is what the PR does:

  • Created 3 interface classes which are implemented by the file formats:
    • ReadBuilder - Builder for reading data from data files
    • AppenderBuilder - Builder for writing data to data files
    • ObjectModel - Provides ReadBuilders and AppenderBuilders for a specific data file format and object model pair
  • Updated the Parquet, Avro, and ORC implementations for these interfaces, and deprecated the old reader/writer APIs
  • Created interface classes which will be used by the actual readers/writers of the data files:
    • AppenderBuilder - Builder for writing a file
    • DataWriterBuilder - Builder for generating a data file
    • PositionDeleteWriterBuilder - Builder for generating a position delete file
    • EqualityDeleteWriterBuilder - Builder for generating an equality delete file
    • No ReadBuilder here - the file format reader builder is reused
  • Created a WriterBuilder class which implements the interfaces above (AppenderBuilder/DataWriterBuilder/PositionDeleteWriterBuilder/EqualityDeleteWriterBuilder) based on a provided file format specific AppenderBuilder
  • Created an ObjectModelRegistry which stores the available ObjectModels and from which engines and users can request readers (ReadBuilder) and writers (AppenderBuilder/DataWriterBuilder/PositionDeleteWriterBuilder/EqualityDeleteWriterBuilder) - see the sketch after this list
  • Created the appropriate ObjectModels:
    • GenericObjectModels - for reading and writing Iceberg Records
    • SparkObjectModels - for reading (vectorized and non-vectorized) and writing Spark InternalRow/ColumnarBatch objects
    • FlinkObjectModels - for reading and writing Flink RowData objects
    • An arrow object model is also registered for vectorized reads of Parquet files into Arrow ColumnarBatch objects
  • Updated the production code paths where reading and writing happen to use the ObjectModelRegistry and the new reader/writer interfaces to access data files
  • Kept the testing code intact to ensure that the new API/code does not break anything
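
The sketch referenced above: a hypothetical look at how an engine would request a reader and a writer from the registry. The method names (readBuilder, appenderBuilder) and the "spark" object model identifier are assumptions for illustration only; the actual signatures are defined by ObjectModelRegistry and the builder interfaces in this PR.

// Hypothetical usage sketch; lookup method names and the object model id are assumptions.
CloseableIterable<InternalRow> rows =
    ObjectModelRegistry.readBuilder(FileFormat.PARQUET, "spark", inputFile)
        .project(expectedSchema) // Iceberg projection schema
        .build();

FileAppender<InternalRow> appender =
    ObjectModelRegistry.appenderBuilder(FileFormat.PARQUET, "spark", outputFile)
        .schema(table.schema())
        .build();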

@pvary pvary force-pushed the file_Format_api_without_base branch 2 times, most recently from c528a52 to 9975b4f on February 20, 2025 09:45
@pvary pvary changed the title from "WIP: Interface based FileFormat API" to "WIP: Interface based DataFile reader and writer API" on Feb 20, 2025
@liurenjie1024 liurenjie1024 left a comment

Thanks @pvary for this proposal, I left some comments.


pvary commented Feb 21, 2025

I will start collecting the differences between the different writer types (appender/dataWriter/equalityDeleteWriter/positionalDeleteWriter) here for reference; a sketch of the first point follows the list:

  • The writer context is different between delete and data files. It contains TableProperties/Configurations which can differ between delete and data files, for example for Parquet: RowGroupSize/PageSize/PageRowLimit/DictSize/Compression, etc. ORC and Avro have similar format-specific configs
  • Specific writer functions for position deletes to write out the PositionDelete records
  • A positional delete PathTransformFunction to convert the writer's data type for the file path to the file format's data type
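
To make the first point concrete, here is a minimal Java sketch. It only relies on the two table property constants quoted later in this thread; the lookup shape and the fallback behavior are illustrative, not the PR's actual code.

import java.util.Map;
import org.apache.iceberg.TableProperties;

// Illustrative only: data and delete writers read different table properties for the same knob.
class WriterContextSketch {
  // Returns the Parquet row group size to use; in this sketch the delete-file setting
  // falls back to the data-file setting when it is not configured.
  static String parquetRowGroupSize(Map<String, String> props, boolean isDeleteFile) {
    String dataValue = props.get(TableProperties.PARQUET_ROW_GROUP_SIZE_BYTES);
    if (isDeleteFile) {
      String deleteValue = props.get(TableProperties.DELETE_PARQUET_ROW_GROUP_SIZE_BYTES);
      return deleteValue != null ? deleteValue : dataValue;
    }
    return dataValue;
  }
}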

@rdblue
Copy link
Contributor

rdblue commented Feb 22, 2025

While I think the goal here is a good one, the implementation looks too complex to be workable in its current form.

The primary issue that we currently have is adapting object models (like Iceberg's internal StructLike, Spark's InternalRow, or Flink's RowData) to file formats so that you can separately write object-model-to-format glue code and have it work throughout support for an engine. I think a diff from the InternalData PR demonstrates it pretty well:

-    switch (format) {
-      case AVRO:
-        AvroIterable<ManifestEntry<F>> reader =
-            Avro.read(file)
-                .project(ManifestEntry.wrapFileSchema(Types.StructType.of(fields)))
-                .createResolvingReader(this::newReader)
-                .reuseContainers()
-                .build();
+    CloseableIterable<ManifestEntry<F>> reader =
+        InternalData.read(format, file)
+            .project(ManifestEntry.wrapFileSchema(Types.StructType.of(fields)))
+            .reuseContainers()
+            .build();
 
-        addCloseable(reader);
+    addCloseable(reader);
 
-        return CloseableIterable.transform(reader, inheritableMetadata::apply);
+    return CloseableIterable.transform(reader, inheritableMetadata::apply);
-
-      default:
-        throw new UnsupportedOperationException("Invalid format for manifest file: " + format);
-    }

This shows:

  • Rather than a switch, the format is passed to create the builder
  • There is no longer a callback passed to create readers for the object model (createResolvingReader)

In this PR, there are a lot of other changes as well. I'm looking at one of the simpler Spark cases in the row reader.

The builder is initialized from DataFileServiceRegistry and now requires a format, class name, file, projection, and constant map:

    return DataFileServiceRegistry.readerBuilder(
            format, InternalRow.class.getName(), file, projection, idToConstant)

There are also new static classes in the file. Each creates a new service and each service creates the builder and object model:

  public static class AvroReaderService implements DataFileServiceRegistry.ReaderService {
    @Override
    public DataFileServiceRegistry.Key key() {
      return new DataFileServiceRegistry.Key(FileFormat.AVRO, InternalRow.class.getName());
    }

    @Override
    public ReaderBuilder builder(
        InputFile inputFile,
        Schema readSchema,
        Map<Integer, ?> idToConstant,
        DeleteFilter<?> deleteFilter) {
      return Avro.read(inputFile)
          .project(readSchema)
          .createResolvingReader(schema -> SparkPlannedAvroReader.create(schema, idToConstant));
    }

The createResolvingReader line is still there, just moved into its own service class instead of in branches of a switch statement.

In addition, there are now a lot more abstractions:

  • A builder for creating an appender for a file format
  • A builder for creating a data file writer for a file format
  • A builder for creating an equality delete writer for a file format
  • A builder for creating a position delete writer for a file format
  • A builder for creating a reader for a file format
  • A "service" registry (what is a service?)
  • A "key"
  • A writer service
  • A reader service

I think that the next steps are to focus on making this a lot simpler, and there are some good ways to do that:

  • Focus on removing boilerplate and hiding the internals. For instance, Key, if needed, should be an internal abstraction and not complexity that is exposed to callers
  • The format-specific data and delete file builders typically wrap an appender builder. Is there a way to handle just the reader builder and appender builder?
  • Is the extra "service" abstraction helpful?
  • Remove ServiceLoader and use a simpler solution. I think that formats could simply register themselves like we do for InternalData (a rough sketch of this follows the list). I think it would be fine to have a trade-off that Iceberg ships with a list of known formats that can be loaded, and if you want to replace that list it's at your own risk.
  • Standardize more across the builders for FileFormat. How idToConstant is handled is a good example. That should be passed to the builder instead of making the whole API more complicated. Projection is the same.
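
For the ServiceLoader point above, a rough sketch of static self-registration in the spirit of InternalData could look like the following. FormatRegistry and ReadBuilderFactory are made-up names for illustration, not the PR's API.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.iceberg.FileFormat;
import org.apache.iceberg.io.InputFile;

// Illustrative sketch of "formats register themselves" without ServiceLoader.
class FormatRegistry {
  @FunctionalInterface
  interface ReadBuilderFactory {
    Object readBuilderFor(InputFile file); // would return the format's read builder
  }

  private static final Map<String, ReadBuilderFactory> READERS = new ConcurrentHashMap<>();

  // Called from a static initializer in each known format integration,
  // e.g. register(FileFormat.PARQUET, "spark.InternalRow", file -> ...).
  static void register(FileFormat format, String objectModelName, ReadBuilderFactory factory) {
    READERS.put(format.name() + "/" + objectModelName, factory);
  }

  static ReadBuilderFactory lookup(FileFormat format, String objectModelName) {
    return READERS.get(format.name() + "/" + objectModelName);
  }
}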


pvary commented Feb 24, 2025

While I think the goal here is a good one, the implementation looks too complex to be workable in its current form.

I'm happy that we agree on the goals. I created a PR to start the conversation. If there are willing reviewers, we can introduce more invasive changes to achieve a better API. I'm all for it!

The primary issue that we currently have is adapting object models (like Iceberg's internal StructLike, Spark's InternalRow, or Flink's RowData) to file formats so that you can separately write object model to format glue code and have it work throughout support for an engine.

I think we need to keep these direct transformations to prevent the performance loss that would be caused by multiple conversions: object model -> common model -> file format.

We have a matrix of transformations which we need to encode somewhere:

Source  | Target
------- | -----------
Parquet | StructLike
Parquet | InternalRow
Parquet | RowData
Parquet | Arrow
Avro    | ...
ORC     | ...

[..]

  • Rather than a switch, the format is passed to create the builder
  • There is no longer a callback passed to create readers for the object model (createResolvingReader)

The InternalData reader has one advantage over the data file readers/writers. The internal object model is static for these readers/writers. For the DataFile readers/writers we have multiple object models to handle.

[..]
I think that the next steps are to focus on making this a lot simpler, and there are some good ways to do that:

  • Focus on removing boilerplate and hiding the internals. For instance, Key, if needed, should be an internal abstraction and not complexity that is exposed to callers

If we allow adding new builders for the file formats, we can remove a good chunk of the boilerplate code. Let me see how this would look.

  • The format-specific data and delete file builders typically wrap an appender builder. Is there a way to handle just the reader builder and appender builder?

We need to refactor the Avro positional delete writer for this, or add a positionalWriterFunc. We also need to consider the format-specific configurations, which are different for the appenders and the delete files (DELETE_PARQUET_ROW_GROUP_SIZE_BYTES vs. PARQUET_ROW_GROUP_SIZE_BYTES).

  • Is the extra "service" abstraction helpful?

If we are ok with having a new Builder for the readers/writers, then we don't need the service. It was needed to keep the current APIs and the new APIs compatible.

  • Remove ServiceLoader and use a simpler solution. I think that formats could simply register themselves like we do for InternalData. I think it would be fine to have a trade-off that Iceberg ships with a list of known formats that can be loaded, and if you want to replace that list it's at your own risk.

Will do

  • Standardize more across the builders for FileFormat. How idToConstant is handled is a good example. That should be passed to the builder instead of making the whole API more complicated. Projection is the same.

Will see what can be achieved.

@pvary pvary force-pushed the file_Format_api_without_base branch 5 times, most recently from c488d32 to 71ec538 on February 25, 2025 16:53
@pvary pvary force-pushed the file_Format_api_without_base branch from ec93457 to 9627db8 on January 15, 2026 12:15
@pvary pvary force-pushed the file_Format_api_without_base branch from b03fd34 to 00626e2 on January 15, 2026 16:09
@pvary pvary force-pushed the file_Format_api_without_base branch from 00626e2 to 67e706b on January 15, 2026 16:45
@pvary pvary force-pushed the file_Format_api_without_base branch from 67e706b to a11216f on January 15, 2026 16:55
@rashworld-max rashworld-max left a comment


@Override
public ReadBuilder<D, S> set(String key, String value) {
// Configuration is not used for Avro reader creation
Contributor

Minor: since the read builder could support this, I think this should throw an exception to let the caller know that it is not functional.

Contributor Author

This is equivalent to the Parquet.ReadBuilder.set method, which is used to pass user-provided configuration values to the reader. See how it is used in the benchmark:

.set("org.apache.spark.sql.parquet.row.requested_schema", sparkSchema.json())
.set("spark.sql.parquet.binaryAsString", "false")
.set("spark.sql.parquet.int96AsTimestamp", "false")
.set("spark.sql.caseSensitive", "false")
.set("spark.sql.parquet.fieldId.write.enabled", "false")
.set("spark.sql.parquet.inferTimestampNTZ.enabled", "false")
.set("spark.sql.legacy.parquet.nanosAsLong", "false")

The caller will not know the actual type of the ReadBuilder, so if we throw an exception here, then use cases will break. That is why the javadoc for the method says:

Reader builders should ignore configuration keys not known for them.
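
For illustration, an implementation honoring that contract might look roughly like this (a sketch, not the actual Avro code; SUPPORTED_KEYS and config are assumed fields of the builder):

// Sketch only: keep the keys this builder understands and silently ignore the rest,
// since callers cannot know the concrete builder type behind the interface.
@Override
public ReadBuilder<D, S> set(String key, String value) {
  if (SUPPORTED_KEYS.contains(key)) {
    config.put(key, value);
  }
  return this;
}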

/** Set the projection schema. */
ReadBuilder<D, S> project(Schema schema);

/** Sets the expected output schema. If not provided derived from the {@link #project(Schema)}. */
Contributor

I don't think that this javadoc is very clear because of the reference to project. Calling project sets the Iceberg schema, not the engine schema, and the builder won't do anything to create an engine schema from an Iceberg schema.

I think that this should be "Sets the engine's representation of the projected schema."

We also do not want callers to use this in place of the Iceberg schema because this is opaque to core code -- Iceberg can't verify the engine schema or convert between the two projection representations. We should have a note that says this schema should match the requested Iceberg projection, but may differ in ways that Iceberg considers equivalent. For example, we may use this to exchange the engine's requested shredded representation for a variant, and we could also use this to pass things like specific classes to use for structs (like we have for our internal object model).

Also, what about only allowing this to be set if project is also set? I don't think it is necessary to do this, but I also can't think of a case where you might set an engine schema but not an Iceberg schema.

Contributor

I think a good example of how this is used comes from the write path, where smallint may be passed to an int field. Here, the engine may want to use a bigint value even though the Iceberg schema is an int.

Contributor Author

builder won't do anything to create an engine schema from an Iceberg schema.

I wanted to highlight that setting the outputSchema is not mandatory. If it is not provided, the builder will generate a default representation.
I agree that better doc is needed.

I will reword and add the examples.

I think projection is required anyway, so I updated the javadoc accordingly.

Contributor Author

Updated the javadoc. Please take another look.
Thanks!

* Sets the input schema accepted by the writer. If not provided derived from the {@link
* #schema(Schema)}.
*/
WriteBuilder<D, S> inputSchema(S schema);
@rdblue rdblue Jan 22, 2026

I went into a bit of detail on the naming and javadoc for the equivalent method in the read builder. We should do the same things here:

  • Consider using engineSchema rather than "input"
  • Clarify the javadoc: this is the engine schema describing the rows passed to the writer, which may be more specific than the Iceberg schema (for example, tinyint, smallint, and int may be passed when the Iceberg type is int). A short sketch of this follows.
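
A hypothetical illustration of the second point, assuming a Spark writer and a builder method named engineSchema as suggested above (none of these names are final API):

// Iceberg column "id" is an int, but the engine hands the writer smallint (ShortType) values.
Schema icebergSchema =
    new Schema(Types.NestedField.required(1, "id", Types.IntegerType.get()));
StructType sparkWriteSchema =
    new StructType(
        new StructField[] {new StructField("id", DataTypes.ShortType, false, Metadata.empty())});

writeBuilder
    .schema(icebergSchema)          // the Iceberg write schema
    .engineSchema(sparkWriteSchema) // assumed method name: the engine's more specific schema
    .build();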

Contributor Author

Updated the javadoc here too.
Please check


/**
* Sets the input schema accepted by the writer. If not provided derived from the {@link
* #rowSchema(Schema)}.
Contributor

This isn't true, is it? How would the engine schema be derived from the row schema? The Iceberg schema can be derived from it, but this one can't.

Contributor Author

Bad wording. The accepted representation will be derived from the schema.

Copied the javadoc from the WriteBuilder
Please check

EqualityDeleteWriteBuilder<D, S> rowSchema(Schema rowSchema);

/** Sets the equality field ids for the equality delete writer. */
default EqualityDeleteWriteBuilder<D, S> equalityFieldIds(List<Integer> fieldIds) {
Contributor

Shouldn't this default go the other way? Usually we translate varargs versions into lists for the actual implementations.
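
To make "the other way" concrete, a sketch of the usual pattern (the interface is shortened here for illustration):

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Sketch of the suggested direction: the varargs overload is the default method
// and delegates to the List-based method that implementations provide.
interface EqualityFieldIdsSketch<B> {
  B equalityFieldIds(List<Integer> fieldIds);

  default B equalityFieldIds(int... fieldIds) {
    return equalityFieldIds(Arrays.stream(fieldIds).boxed().collect(Collectors.toList()));
  }
}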

Contributor Author

In FileMetadata, equality fields are stored as int[], so I used the target type directly to avoid unnecessary conversions.

WriteBuilder<D, S> withAADPrefix(ByteBuffer aadPrefix);

/** Finalizes the configuration and builds the {@link FileAppender}. */
FileAppender<D> build() throws IOException;
Contributor

Why does this throw an IOException when the read path does not?

Contributor Author

When file creation fails, ORC wraps the IOException in a RuntimeIOException. Parquet and Avro surface the original IOException. I chose to follow the Parquet and Avro behavior and leave this unchanged.
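
For illustration, the two behaviors contrasted above, using the JDK's UncheckedIOException as a stand-in for Iceberg's RuntimeIOException (a sketch, not the real builders):

import java.io.IOException;
import java.io.UncheckedIOException;

class BuildExceptionSketch {
  // Parquet/Avro-style: the checked IOException from file creation propagates to the caller.
  static Object buildPropagating() throws IOException {
    throw new IOException("file creation failed");
  }

  // ORC-style: the checked exception is wrapped in an unchecked one.
  static Object buildWrapping() {
    try {
      return buildPropagating();
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }
}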

* Sets the input schema accepted by the writer. If not provided derived from the {@link
* #schema(Schema)}.
*/
DataWriteBuilder<D, S> inputSchema(S schema);
Contributor

This is not derived from the write schema, either. And this can be moved to the common write methods, right? I don't see a difference between the two.

Contributor Author

Updated the name and the javadoc based on the comments on the WriteBuilder.
We can't move this to the common methods, as we don't have this in the PositionDeleteWriteBuilder.

@pvary pvary changed the title from "Core: Interface based DataFile reader and writer API - PoC" to "Core: Interface based DataFile reader and writer API" on Jan 23, 2026